
    Can Large Language Models design a Robot?

    Large Language Models can lead researchers in the design of robots. Comment: Under review

    Model Based Control of Soft Robots: A Survey of the State of the Art and Open Challenges

    Continuum soft robots are mechanical systems made entirely of continuously deformable elements. This design solution aims to bring robots closer to invertebrate animals and to the soft appendages of vertebrate animals (e.g., an elephant's trunk, a monkey's tail). This work introduces the control theorist's perspective on this novel development in robotics. We aim to remove the barriers to entry into this field by presenting existing results and future challenges using a unified language and within a coherent framework. Indeed, the main difficulty in entering this field is the wide variability of terminology and scientific backgrounds, which makes it hard to acquire a comprehensive view of the topic. Another limiting factor is that it is not obvious where to draw a clear line between the limitations imposed by the technology not yet being mature and the challenges intrinsic to this class of robots. In this work, we argue that the intrinsic effects are the continuum or multi-body dynamics, the presence of a non-negligible elastic potential field, and the variability in sensing and actuation strategies. Comment: 69 pages, 13 figures
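The intrinsic effects named in the abstract (continuum or multi-body dynamics plus a non-negligible elastic potential) are commonly written in a rigid-robotics-style equation of motion with added stiffness and damping terms; a generic, illustrative form (the symbols below are ours, not the survey's notation) is:

```latex
% Generic soft-robot dynamics: inertia M, Coriolis C, gravity G,
% elastic potential term Kq, dissipation D\dot{q}, actuation map A(q).
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + Kq + D\dot{q} = A(q)\,\tau
```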

    An Experimental Study of Model-based Control for Planar Handed Shearing Auxetics Robots

    Parallel robots based on Handed Shearing Auxetics (HSAs) can implement complex motions using standard electric motors while keeping the structure entirely soft, thanks to specifically designed architected metamaterials. However, their control is especially challenging due to varying and coupled stiffness, shearing, non-affine terms in the actuation model, and underactuation. In this paper, we present a model-based control strategy for planar HSA robots that enables regulation in task space. We formulate the equations of motion, show that they admit a collocated form, and design a P-satI-D feedback controller with compensation for elastic and gravitational forces. We experimentally identify the model and verify the proposed control strategy in closed loop. Comment: 12 pages, 10 figures
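A P-satI-D law combines proportional and derivative feedback with a saturated integral term, plus feedforward compensation of elastic and gravitational forces. A minimal single-step sketch (gains, saturation level, and compensation hooks are illustrative, not the paper's identified values):

```python
import numpy as np

def p_sat_i_d_step(q, q_dot, q_des, integ, dt,
                   Kp=5.0, Ki=1.0, Kd=0.5, sat=1.0,
                   elastic_comp=None, gravity_comp=None):
    """One step of a P-satI-D law: proportional + saturated-integral
    + derivative feedback, with optional feedforward compensation of
    elastic and gravitational forces (illustrative gains, not the
    paper's identified values)."""
    e = q_des - q
    integ = integ + e * dt                 # accumulate tracking error
    # bound the integral action by routing it through tanh
    tau = Kp * e + Ki * np.tanh(integ / sat) * sat - Kd * q_dot
    if elastic_comp is not None:
        tau = tau + elastic_comp(q)        # cancel elastic forces
    if gravity_comp is not None:
        tau = tau + gravity_comp(q)        # cancel gravitational forces
    return tau, integ
```

Bounding the accumulated error through `tanh` is one common way to realize the "saturated integral" idea, which limits windup while retaining integral action near the setpoint.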

    Deep Metric Imitation Learning for Stable Motion Primitives

    Imitation Learning (IL) is a powerful technique for intuitive robot programming. However, ensuring the reliability of learned behaviors remains a challenge. In the context of reaching motions, a robot should consistently reach its goal regardless of its initial conditions. To meet this requirement, IL methods often employ specialized function approximators that guarantee this property by construction. Although effective, these approaches come with a set of limitations: 1) they are unable to fully exploit the capabilities of modern Deep Neural Network (DNN) architectures, 2) some are restricted in the family of motions they can model, resulting in suboptimal IL capabilities, and 3) they require explicit extensions to account for the geometry of motions that involve orientations. To address these challenges, we introduce a novel stability loss function inspired by the triplet loss used in the deep metric learning literature. This loss does not constrain the DNN's architecture and enables learning policies that yield accurate results. Furthermore, it is easily adaptable to the geometry of the robot's state space. We provide a proof of the stability properties induced by this loss and empirically validate our method in various settings. These settings include Euclidean and non-Euclidean state spaces, as well as first-order and second-order motions, both in simulation and with real robots. More details about the experimental results can be found at: https://youtu.be/ZWKLGntCI6w. Comment: 21 pages, 15 figures, 4 tables
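The triplet-inspired idea can be illustrated with a simple hinge on distances to the goal: along a demonstration, each successive state should sit closer to the goal than its predecessor by some margin. A minimal numpy sketch (our simplification for illustration, not the paper's exact loss or learned metric):

```python
import numpy as np

def stability_triplet_loss(latent_traj, goal, margin=0.1):
    """Triplet-style stability loss sketch: for each consecutive pair
    of states on a demonstrated trajectory (mapped to some latent
    space), the later state should be closer to the goal than the
    earlier state by at least `margin`; violations are penalized with
    a hinge, as in the triplet loss from deep metric learning."""
    d = np.linalg.norm(latent_traj - goal, axis=1)  # distances to goal
    # hinge: nonzero whenever the distance fails to shrink by `margin`
    violations = np.maximum(0.0, d[1:] - d[:-1] + margin)
    return violations.mean()
```

Because the penalty only constrains distances along trajectories, it places no architectural restriction on the network producing the latent states, which is the point the abstract emphasizes.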

    Coordination of unmanned marine vehicles for asymmetric threats protection

    A coordination protocol for systems of unmanned marine vehicles is proposed for protection against asymmetric threats. The problem is first modelled in a game-theoretic framework as a potential game. Then, an extension of existing learning algorithms is proposed to address the problem of tracking a possibly moving threat. The approach is evaluated in scenarios of different geometric complexity, such as open sea, bays, and harbours. Performance is evaluated in terms of a security index that yields a tool for team sizing: given a desired security level to be guaranteed and the maximum threat velocity, the tool provides the minimum number of marine vehicles to be used in the system.
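In a potential game, every unilateral best response increases a shared potential function, so simple sequential learning rules reach an equilibrium. A toy sketch with an illustrative two-player coordination potential (not the paper's threat-protection game or its tracking extension):

```python
import numpy as np

def best_response_dynamics(potential, n_actions, iters=20, seed=0):
    """Sequential best-response dynamics for a two-player potential
    game: each player in turn picks the action maximizing the shared
    potential. In a potential game this monotonically increases the
    potential, so the joint action settles at an equilibrium."""
    rng = np.random.default_rng(seed)
    a = list(rng.integers(0, n_actions, size=2))  # random joint action
    for _ in range(iters):
        for i in range(2):
            vals = [potential(*(a[:i] + [x] + a[i + 1:]))
                    for x in range(n_actions)]
            a[i] = int(np.argmax(vals))
    return tuple(a)

# Illustrative potential rewarding agreement on the same action:
coord = lambda x, y: -(x - y) ** 2
```

For coordination-style potentials like `coord`, both players end up choosing the same action regardless of the random start, which is the kind of self-organizing behavior the protocol exploits.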

    Design and Assessment of Control Maps for Multi-Channel sEMG-Driven Prostheses and Supernumerary Limbs

    Proportional and simultaneous control algorithms are considered among the most effective ways of mapping electromyographic signals to an artificial device. However, the applicability of these methods is limited by the high number of electromyographic features they require to operate: typically twice as many as the actuators to be controlled. Indeed, extracting many independent electromyographic signals is challenging for a number of reasons, ranging from technological to anatomical. On the contrary, the number of actively moving parts in classic prostheses or extra limbs is often high. This paper addresses this issue by proposing and experimentally assessing a set of algorithms capable of proportionally and simultaneously controlling as many actuators as there are independent electromyographic signals available. Two sets of solutions are considered: the first uses electromyographic signals only as input, while the second adds postural measurements as a source of information. First, all the proposed algorithms are experimentally tested in terms of precision, efficiency, and usability on twelve able-bodied subjects in a virtual environment; a state-of-the-art controller using twice the number of electromyographic signals as input is adopted as a benchmark. We then performed qualitative tests in which the maps are used to control a prototype of an upper-limb prosthesis composed of a robotic hand and a wrist implementing active prono-supination. Eight able-bodied subjects participated in this second round of testing. Finally, the proposed strategies were tested in exploratory experiments involving two subjects with limb loss. Evaluations in both virtual and realistic settings are encouraging and suggest the effectiveness of the proposed approach.
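The core requirement, as many actuators as independent EMG channels driven proportionally and at the same time, can be sketched as a linear map from normalized activations to actuator commands, optionally mixing in postural measurements for the second family of solutions (shapes and weights below are illustrative, not the paper's calibrated maps):

```python
import numpy as np

def control_map(emg, W_emg, pose=None, W_pose=None):
    """Proportional & simultaneous control map sketch: each actuator
    command is a linear combination of normalized EMG activations, so
    n independent channels drive n actuators at once. The optional
    postural term mirrors the second set of solutions, which adds
    posture measurements as an extra information source."""
    emg = np.clip(emg, 0.0, 1.0)   # normalized muscle activation
    u = W_emg @ emg                # one proportional command per actuator
    if pose is not None:
        u = u + W_pose @ pose      # optional postural contribution
    return u
```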

    Robotic Fabric Flattening with Wrinkle Direction Detection

    Deformable Object Manipulation (DOM) is an important field of research, as it contributes to practical tasks such as automatic cloth handling, cable routing, and surgical operation. Perception is considered one of the major challenges in DOM due to the complex dynamics and high degrees of freedom of deformable objects. In this paper, we develop a novel image-processing algorithm based on Gabor filters to extract useful features from cloth and, based on this, devise a strategy for cloth-flattening tasks. We evaluate the overall framework experimentally and compare it with three human operators. The results show that our algorithm accurately determines the direction of wrinkles on the cloth in both simulation and real-robot experiments. Moreover, the robot executing the flattening task with the dewrinkling strategy given by our algorithm achieves satisfactory performance compared to other baseline methods. The experiment video is available at https://sites.google.com/view/robotic-fabric-flattening/hom
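The Gabor-based orientation idea can be sketched as a small filter bank whose strongest response picks the dominant wrinkle direction. This is a simplification of the paper's pipeline, and the kernel size, bandwidth, and wavelength below are illustrative:

```python
import numpy as np

def gabor_kernel(theta, ksize=21, sigma=4.0, lambd=8.0):
    """Real-valued Gabor kernel at orientation theta: a Gaussian
    envelope times a cosine carrier of wavelength `lambd`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lambd))

def dominant_wrinkle_direction(img, n_orientations=8):
    """Estimate the dominant wrinkle orientation as the filter-bank
    angle with the strongest total response energy (a simplified
    sketch of the Gabor-based feature extraction)."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    energies = []
    for t in thetas:
        k = gabor_kernel(t)
        # circular convolution via FFT (kernel zero-padded to img shape)
        resp = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape))
        energies.append(np.abs(resp).sum())
    return thetas[int(np.argmax(energies))]
```

On a synthetic grating whose intensity varies along x (i.e., vertical wrinkles), the bank responds most strongly at angle 0, since that kernel's carrier frequency and orientation match the grating.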

    DeepDynamicHand: A Deep Neural Architecture for Labeling Hand Manipulation Strategies in Video Sources Exploiting Temporal Information

    Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce this extraordinary behavior through the design of deformable yet robust end-effectors. To this end, the investigation of human behavior has become crucial to correctly inform the technological development of robotic hands that can successfully exploit environmental constraints as humans actually do. Among the different tools robotics can leverage to achieve this objective, deep learning has emerged as a promising approach for the study, and then the implementation, of neuroscientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition, limiting the effectiveness of these techniques in identifying the sequences of manipulation primitives underpinning action generation, e.g., during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes temporal information into account to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images consisting of frames extracted from videos of grasping and manipulation tasks with objects and external environmental constraints. For training purposes, the videos are divided into intervals, each associated with a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame while taking into consideration the history of actions performed in the previous frames.
Experimental validation has been performed on two datasets of dynamic hand-centric strategies, in which subjects regularly interact with objects and the environment. The proposed architecture achieves very good classification accuracy on both datasets, reaching performance of up to 94% and outperforming state-of-the-art techniques. The outcomes of this study can be successfully applied to robotics, e.g., for the planning and control of soft anthropomorphic manipulators.
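The temporal part of the CNN+RNN pipeline can be sketched at the shape level: per-frame feature vectors (stand-ins for the CNN's output) feed a recurrent cell whose hidden state carries the history of previous frames, and each step emits per-action scores. Weights, sizes, and the plain Elman cell below are our illustration, not the paper's network:

```python
import numpy as np

def rnn_over_frames(frame_feats, Wxh, Whh, Who):
    """Run a simple Elman-style recurrent cell over a sequence of
    per-frame feature vectors. The hidden state h accumulates the
    history of previous frames, so the action scores emitted at each
    step depend on what came before, not just the current frame."""
    h = np.zeros(Whh.shape[0])
    outputs = []
    for x in frame_feats:              # one feature vector per frame
        h = np.tanh(Wxh @ x + Whh @ h) # update hidden state
        outputs.append(Who @ h)        # action scores for this frame
    return np.stack(outputs)
```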

    Dexterity augmentation on a synergistic hand: the Pisa/IIT SoftHand+

    Soft robotics and under-actuation have recently been demonstrated to be effective approaches for the implementation of humanoid robotic hands. Nevertheless, it is often difficult to increase the number of degrees of actuation of heavily under-actuated hands without compromising their intrinsic simplicity. In this paper, we analyze the Pisa/IIT SoftHand and its underlying logic of adaptive synergies, and propose a method to double its number of degrees of actuation with a very limited impact on its mechanical complexity. This new design paradigm is based on the constructive exploitation of friction phenomena. Based on this method, a novel prototype of an under-actuated robot hand with two degrees of actuation is proposed, named the Pisa/IIT SoftHand+. A preliminary validation of the prototype follows, based on grasping and manipulation examples with some objects.